Lightweight Unsupervised Multi-exposure Light Field Image Fusion Based on Full-Aperture Estimation
LI Yulong1, CHEN Yeyao1, JIN Chongchong2, JIANG Gangyi1
1. Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo 315211; 2. College of Science and Technology, Ningbo University, Ningbo 315300
Abstract Multi-exposure light field (LF) image fusion is an effective way to overcome the limited dynamic range of LF cameras. However, due to the high-dimensional structure of LFs, existing methods struggle to process multi-exposure LF images efficiently. To address this issue, a method for lightweight unsupervised multi-exposure LF image fusion based on full-aperture estimation (MELFF-FAE) is proposed. First, representative scene information is extracted from the central sub-aperture image (SAI) to reduce the heavy computational burden caused by taking the full LF image as input. Second, a full-aperture weight estimation module is designed to obtain the fusion weights of the full LF image by mining the LF angular information. The difference between the boundary SAIs and the central SAI is utilized to construct a full weight map in the feature space. Finally, the weight map is multiplied with the source image to generate a fused LF image. Experimental results demonstrate that MELFF-FAE can generate LF images with high contrast and detailed textures while preserving good angular consistency. Moreover, compared with existing representative methods, MELFF-FAE achieves superior results in both quantitative and qualitative comparisons while significantly reducing the computational burden.
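The final stage described in the abstract, multiplying per-pixel weight maps with the source images, follows the general weight-map formulation of exposure fusion. A minimal sketch of that fusion step, assuming simple precomputed per-exposure weight maps rather than the learned full-aperture weights of MELFF-FAE (the function name and array shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def fuse_with_weight_maps(exposures, weight_maps, eps=1e-8):
    """Fuse multi-exposure images via per-pixel weight maps.

    exposures:   list of N source images, each an (H, W, C) float array
    weight_maps: list of N weight maps, each an (H, W) float array

    The weights are normalized per pixel so that they sum to 1 across
    exposures, then each source image is multiplied by its weight map
    and the weighted images are summed.
    """
    w = np.stack(weight_maps, axis=0)               # (N, H, W)
    w = w / (w.sum(axis=0, keepdims=True) + eps)    # per-pixel normalization
    imgs = np.stack(exposures, axis=0)              # (N, H, W, C)
    return (imgs * w[..., None]).sum(axis=0)        # (H, W, C) fused image
```

For example, fusing two exposures with equal weight maps reduces to a per-pixel average of the two source images; in MELFF-FAE the weight maps would instead be estimated from angular information in the feature space.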
Received: 10 September 2025
Fund: National Natural Science Foundation of China (No. 62271276, 62401301, 62501322), Natural Science Foundation of Zhejiang Province (No. LQ24F010002)
Corresponding Author:
JIANG Gangyi, Ph.D., professor. His research interests include deep learning, visual perception and coding, and light field computational imaging.
About authors: LI Yulong, Master student. His research interests include deep learning and multi-exposure light field image fusion.
CHEN Yeyao, Ph.D., lecturer. His research interests include deep learning and light field image processing.
JIN Chongchong, Ph.D., lecturer. Her research interests include deep learning and image quality assessment.